Guided Dropout: Improving Deep Networks Without Increased Computation

Authors

Abstract

Deep convolutional neural networks are going deeper and deeper. However, the complexity of these models makes them prone to overfitting during training. Dropout, one of the crucial tricks, prevents units from co-adapting too much by randomly dropping neurons during training. It effectively improves the performance of deep networks but ignores the differences in importance between neurons. To address this issue, this paper presents a new dropout method called guided dropout, which selects which neurons to switch off according to their kernels and preserves informative neurons: it uses an unsupervised clustering algorithm to cluster similar neurons in each hidden layer and drops neurons with a certain probability within each cluster. Each layer thereby preserves neurons with different roles while maintaining the model's sparsity and generalization, since each cluster plays a role in learning different features. We evaluated our approach against two standard dropout methods on three well-established public object detection datasets. Experimental results on multiple datasets show that the proposed method improves false positives, the precision-recall curve and average precision without increasing the amount of computation. Performance gains can be seen even in shallow networks. The concept is also beneficial to other vision tasks.
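The abstract describes the mechanism only at a high level. The sketch below illustrates one plausible reading in NumPy/scikit-learn: units in a hidden layer are clustered by their incoming weight vectors ("kernels"), and dropout is then applied within each cluster so that every functional role keeps at least one representative. The choice of k-means, the kernel-based features, the keep-one-per-cluster rule and all parameter values are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch of cluster-guided dropout, based only on the abstract above.
# K-means on incoming weight vectors and the keep-one-per-cluster rule are
# illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans

def guided_dropout_mask(weights, n_clusters=4, drop_prob=0.5, rng=None):
    """Build a per-unit keep mask for one hidden layer.

    weights: (n_units, fan_in) array; row i is unit i's incoming kernel.
    Units with similar kernels are grouped, and dropout is applied within
    each group so that every functional role stays represented.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_units = weights.shape[0]

    # Unsupervised clustering of similar units (assumption: k-means on kernels).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(weights)

    mask = np.ones(n_units, dtype=np.float32)
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        drop = idx[rng.random(idx.size) < drop_prob]
        if drop.size == idx.size:          # keep at least one unit per cluster
            drop = drop[:-1]
        mask[drop] = 0.0
    # Rescale by the mean keep rate (inverted-dropout convention), so the
    # expected activation magnitude is unchanged and test time needs no change.
    return mask / max(mask.mean(), 1e-8)

# Usage: mask the activations of a 64-unit layer with fan-in 128.
h = np.random.randn(32, 64)                # batch of hidden activations
W = np.random.randn(64, 128)               # incoming weights of the layer
h_dropped = h * guided_dropout_mask(W)
```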


Similar Articles

Dropout Rademacher Complexity Of Deep Neural Networks

Great successes of deep neural networks have been witnessed in various real applications. Many algorithmic and implementation techniques have been developed; however, theoretical understanding of many aspects of deep neural networks is far from clear. A particularly interesting issue is the usefulness of dropout, which was motivated by the intuition of preventing complex co-adaptation of featur...


Adaptive dropout for training deep neural networks

Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called ‘standout’ in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This ‘adapt...
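As a concrete illustration of the ‘standout’ idea in this teaser, the snippet below turns each hidden unit's own pre-activation into a keep probability via a sigmoid. Reusing the layer's weights scaled by two scalars (alpha, beta) is the weight-sharing simplification reported for standout; treat this as a hedged sketch rather than the paper's full overlaid belief network, and the parameter values as placeholders.

```python
# A minimal NumPy sketch of standout-style adaptive dropout: the keep
# probability of each unit is a sigmoid of its pre-activation, so strongly
# activated units are kept more often. Alpha/beta values are illustrative.
import numpy as np

def standout_layer(x, W, b, alpha=1.0, beta=0.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    z = x @ W + b                                           # pre-activation
    h = np.maximum(z, 0.0)                                  # ReLU activity
    keep_prob = 1.0 / (1.0 + np.exp(-(alpha * z + beta)))   # sigmoid "belief net"
    mask = rng.random(keep_prob.shape) < keep_prob          # binary keep mask
    # At test time, use h * keep_prob (the expectation) instead of sampling.
    return h * mask                                         # zero some activities

x = np.random.randn(8, 32)                  # batch of 8 inputs
W = np.random.randn(32, 16) * 0.1
out = standout_layer(x, W, np.zeros(16))
```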


Surprising properties of dropout in deep networks

We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that ...


Variational Dropout Sparsifies Deep Neural Networks

We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse sol...
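The teaser mentions unbounded per-weight dropout rates and extremely sparse solutions; the sketch below shows the underlying Gaussian-dropout view, where each weight theta is multiplied by noise xi ~ N(1, alpha) and weights whose learned alpha grows large are pruned. The shapes, initialization, and pruning threshold here are illustrative assumptions, not the authors' training procedure.

```python
# A minimal NumPy sketch of the Gaussian-dropout view behind variational
# dropout: w = theta * xi with xi ~ N(1, alpha), alpha learned per weight.
# Weights whose alpha becomes very large carry almost no signal and can be
# pruned; the log-alpha > 3 threshold is an illustrative convention.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal((16, 8))        # weight means
log_alpha = rng.uniform(-4, 4, (16, 8))     # per-weight log dropout rate (learned)

# Training-time sample: multiplicative Gaussian noise per weight.
xi = 1.0 + np.exp(0.5 * log_alpha) * rng.standard_normal(theta.shape)
w_sample = theta * xi

# Sparsification at test time: drop high-noise weights, keep the rest.
pruned = log_alpha > 3.0
w_test = np.where(pruned, 0.0, theta)
print(f"pruned {pruned.mean():.0%} of weights")
```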


Deep Counterfactual Networks with Propensity-Dropout

We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject’s potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The ...



Journal

Journal title: Intelligent Automation and Soft Computing

Year: 2023

ISSN: 2326-005X, 1079-8587

DOI: https://doi.org/10.32604/iasc.2023.033286